Bot Summary Dashboard
The Bot Summary Dashboard provides bot authors with insights on bot performance over time and enables them to identify actions they need to take to improve bot performance.
Druid offers a Dashboard template with predefined widgets, which provides several ways (dashboard pages) of analyzing your bot performance:
- Overview – provides a summary of bot KPIs based on the conversation history data.
- Engagement – provides metrics that help you understand how people engage with your chatbot.
- Origin – provides metrics stored in the [[QueryParams]] system entity.
- Contact Center – provides Live Chat metrics that help you analyze the performance and efficiency of your Contact Center.
- Automations – provides insights related to Druid integrations.
- NLU – provides information that helps you analyze the performance of your bot’s NLP model.
The dashboard template contains only predefined Druid KPIs. If you want to customize the dashboard with your own KPIs, see Customizing the dashboard with custom KPIs.
Accessing the Bot Summary Dashboard
To access the Dashboard, select your bot and from the Dashboards menu, click Bot Summary.
Engagement KPIs
Metric name | Metric display name | Widget Type | Description |
---|---|---|---|
Messages | Total | KPI | The number of messages exchanged with the chatbot, both user says and chatbot says. |
Number of interactions | Total | KPI | The number of user-chatbot interactions in which the user opened the chatbot bubble but did not necessarily continue the conversation with the chatbot. |
Number of interactions | Engaged | KPI | The number of user-chatbot interactions, in which the user opened the chatbot and sent at least one message. |
Number of interactions | Dropped | KPI | The number of user-chatbot interactions in which the user abandoned the conversation at a specific moment in time, either when the chatbot asked for input or after the special flow Inactivity Cancel Conversation was triggered. |
Number of conversations | Total | KPI | The number of conversations in which the user opened the chatbot widget and sent at least one message to the bot. |
Users | Total | KPI | Total number of users (both unique authenticated users and anonymous users) who interacted with the chatbot. |
Users | Active | KPI | Total number of users who interacted with the chatbot and sent at least one message to the bot. |
Users | Authenticated | KPI | Total number of unique authenticated users who interacted with the chatbot and sent at least one message to the bot. |
Users | Anonymous | KPI | Total number of anonymous users who interacted with the chatbot and sent at least one message to the bot. |
Users | Inactive | KPI | Total number of unique authenticated users who were inactive (did not send a message to the bot). |
Conversation duration | AVG | KPI | The amount of time (in minutes) a conversation lasts on average between a user and the chatbot. The metric also includes the live chat conversations. |
Conversation turns | AVG | KPI | The number of messages exchanged on average in a user-bot conversation. The metric also includes the live chat conversations. |
Conversation turns | Max | KPI | The maximum number of messages exchanged in a user-bot conversation. The metric also includes the live chat conversations. |
Flows | AVG Flows per user | KPI | The number of flows a user interacts with on average in the conversation with the chatbot. |
Flows | AVG Flow Step per user | KPI | The number of flow steps a user interacts with on average in the conversation with the chatbot. |
Feedback Flows | Requested | KPI | The number of times the feedback flow was initiated (0-Requested); see the sketch after this table. Important! This KPI takes data from the [[ChatActivityData]] entity. As a best practice, you can download the Customer Satisfaction solution template, which contains this configuration, from the Solutions Library. Otherwise, if you want to configure it yourself, make sure that: |
Feedback Flows | Answered | KPI | The number of conversations that requested feedback and have an answer. Important! This KPI takes data from the [[ChatActivityData]] entity. As a best practice, you can download the Customer Satisfaction solution template, which contains this configuration, from the Solutions Library. Otherwise, if you want to configure it yourself, make sure that: |
Feedback Flows | Positive | KPI | The number of conversations that requested feedback and received a positive answer (4/5-Positive). Important! This KPI takes data from the [[ChatActivityData]] entity. As a best practice, you can download the Customer Satisfaction solution template, which contains this configuration, from the Solutions Library. Otherwise, if you want to configure it yourself, make sure that: Note: For DRUID version 1.71 and higher, this KPI has a redirect to the conversation history data (Conversation History V2) with the Context Data filter set on the respective conversations. |
Feedback Flows | Negative | KPI | The number of conversations that requested feedback and received a negative answer (1/2/3-Negative). Important! This KPI takes data from the [[ChatActivityData]] entity. As a best practice, you can download the Customer Satisfaction solution template, which contains this configuration, from the Solutions Library. Otherwise, if you want to configure it yourself, make sure that: Note: For DRUID version 1.71 and higher, this KPI has a redirect to the conversation history data (Conversation History V2) with the Context Data filter set on the respective conversations. |
Top flows | - | Table | Counts the top flows used in conversations, ordered descending. |
Channel (active users) | Web, Skype, Facebook, Slack, WhatsApp, Teams | Chart | The total number of users who interacted with the chatbot and sent at least one message to the bot, grouped by channel. |
TTA | 1,2,3,4,5,>5 | Table | Counts how fast the chatbot responded to user messages, split by seconds (1, 2, 3, 4, 5, >5); each message is counted only once. The metric does not take live chat conversations into consideration. |
TTA drivers, flows | - | Table | Top 10 flows that generated the longest chatbot response time (>5 seconds), grouped by flow name and ordered descending. |
TTA drivers, integrations | - | Table | Top 10 integrations that generated the longest chatbot response time (>5 seconds), grouped by Druid integration name and ordered descending. |
Feedback Flows | 1,2,3,4,5 | Chart | Counts the number of feedback answers registered for each evaluation score and displays them in a bar chart. |
Feedback Flows | Positive/Negative/Requested | Chart | Displays the Feedback Flows KPIs in a time series chart. |
Number of conversations | With Live Chat, Without Live Chat | Chart | Counts the number of conversations that had live chat with a helpdesk agent and the number of conversations without. |
Number of interactions dropped, drivers | - | Chart | Top flows (flow name and flow steps) in which users most often abandoned the conversation. |
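The Feedback Flows KPIs above share the answer coding stated in their descriptions (0-Requested, 1/2/3-Negative, 4/5-Positive) and take their data from the [[ChatActivityData]] entity. The sketch below is a minimal illustration of that bucketing only; it is not DRUID's implementation, and the record shape and the `Feedback` field name are hypothetical.

```python
# Minimal sketch (not DRUID's implementation) of the feedback bucketing described above.
# The record/field names are hypothetical; the 0 / 1-3 / 4-5 coding comes from the table.
from collections import Counter

def feedback_buckets(chat_activity_records):
    """chat_activity_records: list of dicts with a 'Feedback' value per conversation,
    where 0 = Requested (no answer yet), 1-3 = Negative, 4-5 = Positive."""
    counts = Counter()
    for record in chat_activity_records:
        score = record.get("Feedback")      # hypothetical field name
        if score is None:
            continue                        # feedback flow was never initiated
        counts["Requested"] += 1            # every initiated feedback flow counts as Requested
        if score == 0:
            continue                        # requested but not answered
        counts["Answered"] += 1
        counts["Positive" if score >= 4 else "Negative"] += 1
    return counts

# Example: one positive answer, one negative answer, one unanswered request.
print(feedback_buckets([{"Feedback": 5}, {"Feedback": 2}, {"Feedback": 0}]))
# -> Requested: 3, Answered: 2, Positive: 1, Negative: 1
```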
Origin
Metric name | Metric display name | Type | Description |
---|---|---|---|
Mobile no | Mobile | KPI | Total number of conversations that were initiated from Direct Line (web channel). |
Mobile Yes | Mobile | KPI | Total number of conversations that were initiated from mobile devices. |
Mobile | Mobile | Chart | Displays the daily count of conversations initiated through Direct Line and mobile devices. |
Origin | Origin | Chart | Displays the daily count of conversations initiated from the base URLs of webpages hosting the web chat. |
Origin | Origin | Table | Displays the top 10 base URLs of webpages hosting the web chat, indicating where conversations were initiated. |
App | App | Chart | Displays the daily count of conversations based on the browser name from which the web chat was initiated. |
OSVersion | OSVersion | Chart | Displays the daily count of conversations based on the operating system of the device from which the web chat was initiated. |
Path Name | Path Name | Table | Displays the top 10 full paths to the web pages from which the user initiated the chat. |
Contact Center KPIs
Metric name | Metric display name | Type | Description |
---|---|---|---|
Conversations | Total | KPI | Total number of conversations in which a live chat connection was initiated. |
Conversations | Answered | KPI | Total number of conversations in which a live chat connection was initiated and an agent answered (connected to the waiting client). |
Conversations | Missed | KPI | Total number of conversations in which a live chat connection was initiated but no agent connected to the waiting client. |
Avg wait live chat | Total | KPI | The average waiting time in a queue before the client is redirected to a live chat conversation with a helpdesk agent. |
Avg live chat conversation duration | Total | KPI | The average duration of a live chat conversation, calculated across the total number of live chat conversations. |
Chat events | Agent Login | KPI | Event triggered every time a helpdesk agent clicks the Log In button and becomes an available agent for taking over live chat conversations. |
Chat events | Client In Queue | KPI | Event triggered every time a user is placed in a waiting queue. |
Chat events | Client disconnect from queue | KPI | Event triggered every time Druid automatically disconnects a user from a waiting queue, because the conversation could not be connected with a helpdesk agent. |
Feedback LiveChat | Requested | KPI | The number of feedback flows requested for each live chat conversation ([[ChatActivityData]].FeedbackLiveChat equals 0, that is, Requested). Important! This KPI takes data from the [[ChatActivityData]] entity. As a best practice, you can download the LiveChat Advanced solution template, which contains this configuration, from the Solutions Library. Otherwise, if you want to configure it yourself, make sure that: |
Feedback LiveChat | Answered | KPI | The number of live chat conversations that requested feedback and have an answer. Important! This KPI takes data from the [[ChatActivityData]] entity. As a best practice, you can download the LiveChat Advanced solution template, which contains this configuration, from the Solutions Library. Otherwise, if you want to configure it yourself, make sure that: |
Feedback LiveChat | Positive | KPI | The number of live chat conversations that requested feedback and have a positive answer, that is, [[ChatActivityData]].FeedbackLiveChat equals 4 or 5 (4/5-Positive). Important! This KPI takes data from the [[ChatActivityData]] entity. As a best practice, you can download the LiveChat Advanced solution template, which contains this configuration, from the Solutions Library. Otherwise, if you want to configure it yourself, make sure that: Note: For DRUID version 1.71 and higher, this KPI has a redirect to the conversation history data (Conversation History V2) with the Context Data filter set on the respective conversations. |
Feedback LiveChat | Negative | KPI | The number of live chat conversations that requested feedback and have a negative answer, that is, [[ChatActivityData]].FeedbackLiveChat equals 1, 2, or 3 (1/2/3-Negative). Important! This KPI takes data from the [[ChatActivityData]] entity. As a best practice, you can download the LiveChat Advanced solution template, which contains this configuration, from the Solutions Library. Otherwise, if you want to configure it yourself, make sure that: Note: For DRUID version 1.71 and higher, this KPI has a redirect to the conversation history data (Conversation History V2) with the Context Data filter set on the respective conversations. |
Conversations | Missed/Answered | Chart | The daily number of missed and answered live chat conversations, displayed in a time series chart. |
Total number of conversations per queue | - | Chart | The total number of live chat conversations registered for each queue, displayed in a time series chart. |
Avg wait live chat | - | Chart | Time series chart displaying the average waiting time for an agent, grouped by queue. |
Feedback LiveChat | - | Chart | Counts the [[ChatActivityData]].FeedbackLiveChat values, grouped by feedback rating score. |
Agent resolution | - | Chart | Counts the [[ChatActivityData]].AgentResolution values, grouped by agent resolution. Important! This KPI takes data from the [[ChatActivityData]] entity, therefore make sure that: |
Avg waiting time (s) for agent per hour | - | Chart | The average waiting time (in seconds) for a helpdesk agent, displayed in a chart split by the hours of the day. |
Total livechat conversation per agent (hours and minutes) | - | Table | The amount of time spent in live chat conversations, grouped by agent. |
Avg waiting time for agent response | - | Table | The average time spent by a helpdesk agent to connect to a user waiting in the queue. |
Avg agent TTA | - | Table | The average time an agent takes to respond to user messages in a live chat conversation, grouped by agent. |
Waiting for agent | 0-1m, 1-3m, 3-5m, 5-10m, 10-15m, >15m | Table | The number of waiting conversations, grouped by time slot (see the sketch after this table). |
Number of conversations per time slot | 0-5m, 5-10m, 10-15m, >15m | Table | The number of live chat conversations, grouped by duration. |
Avg conversation turns per time slot | 0-5m, 5-10m, 10-15m, >15m | Table | The average number of messages exchanged in conversations within each time slot. |
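Several Contact Center widgets group live chat conversations by waiting time, either averaged per queue (Avg wait live chat) or bucketed into the time slots of the Waiting for agent table. The sketch below is only an illustration of those two groupings; the field names (`Queue`, `WaitMinutes`) and the exact slot boundary handling are assumptions, not DRUID's implementation.

```python
# Minimal sketch (assumed field names and slot boundaries, not DRUID's implementation)
# of the "Waiting for agent" time slots and the per-queue average waiting time.
from collections import defaultdict

# Upper bounds (minutes) for the dashboard's waiting-time slots; ">15m" is the catch-all.
WAIT_SLOTS = [(1, "0-1m"), (3, "1-3m"), (5, "3-5m"), (10, "5-10m"), (15, "10-15m")]

def wait_slot(wait_minutes):
    """Map a waiting time in minutes to its time slot label."""
    for upper_bound, label in WAIT_SLOTS:
        if wait_minutes <= upper_bound:
            return label
    return ">15m"

def avg_wait_per_queue(conversations):
    """conversations: list of dicts with hypothetical 'Queue' and 'WaitMinutes' keys."""
    totals = defaultdict(lambda: [0.0, 0])              # queue -> [sum of waits, count]
    for conv in conversations:
        acc = totals[conv["Queue"]]
        acc[0] += conv["WaitMinutes"]
        acc[1] += 1
    return {queue: wait_sum / count for queue, (wait_sum, count) in totals.items()}

sample = [{"Queue": "Support", "WaitMinutes": 2.5},
          {"Queue": "Support", "WaitMinutes": 7.0},
          {"Queue": "Sales", "WaitMinutes": 0.5}]
print(wait_slot(2.5), wait_slot(20))     # 1-3m >15m
print(avg_wait_per_queue(sample))        # {'Support': 4.75, 'Sales': 0.5}
```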
Automations KPIs
Metric name | Metric display name | Type | Description |
---|---|---|---|
ROI | Savings | KPI | Calculates how many euros you saved by using the chatbot on specific business flows (see the sketch after this table). For Druid version 1.61 and higher, you can change the currency from the bot details, Analytics section (see Bot Advanced Settings). |
ROI | Revenue | KPI | Calculates the revenue generated when using the chatbot on specific business flows. For Druid version 1.61 and higher, you can change the currency from the bot details, Analytics section (see Bot Advanced Settings). |
ROI | FTE | KPI | Calculates how many hours you saved when using the chatbot on specific business flows. |
Integration error | Total | KPI | The total number of integrations (connectors), flows, and apps that triggered an integration error. |
Integration errors, drivers | - | Chart | The total number of integrations that triggered an integration error, grouped by Druid integration name. |
Integration errors, flows drivers | - | Chart | The total number of flows that triggered an integration error, grouped by flow name. |
Integration errors, apps drivers | - | Chart | The total number of apps that triggered an integration error, grouped by app name. |
ROI Cost Saving Drivers | - | Table | Shows which flow/connector generated the expected ROI Cost Saving value set on the flow/connector. You can filter drivers by using group tags. For more information, see Analytics. |
ROI Revenue Drivers | - | Table | Shows which flow/connector generated the expected ROI Revenue value set on the flow/connector. You can filter drivers by using group tags. For more information, see Analytics. |
ROI FTE Drivers | - | Table | Shows which flow/connector generated the expected ROI FTE (Full Time Equivalent) value set on the flow/connector. You can filter drivers by using group tags. For more information, see Analytics. |
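The ROI widgets above report the expected ROI Cost Saving, Revenue, and FTE values that you set on each flow/connector, broken down per driver. The sketch below shows one way such an aggregation could look, under the assumption that each successful execution contributes the values configured on its flow/connector; the function, data shapes, and names are hypothetical and are not DRUID's schema.

```python
# Minimal sketch of a per-driver ROI aggregation, assuming each successful flow/connector
# execution contributes the expected ROI values configured on it (names are hypothetical).
from collections import defaultdict

def roi_drivers(executions, expected_roi):
    """executions: list of flow/connector names that completed successfully.
    expected_roi: dict mapping each name to its configured
    {'savings': euros, 'revenue': euros, 'fte_hours': hours} values."""
    totals = defaultdict(lambda: {"savings": 0.0, "revenue": 0.0, "fte_hours": 0.0})
    for name in executions:
        config = expected_roi.get(name)
        if not config:
            continue                      # no ROI values configured on this flow/connector
        for key, value in config.items():
            totals[name][key] += value
    return dict(totals)

expected = {"Reset Password": {"savings": 3.0, "revenue": 0.0, "fte_hours": 0.25}}
print(roi_drivers(["Reset Password", "Reset Password"], expected))
# -> {'Reset Password': {'savings': 6.0, 'revenue': 0.0, 'fte_hours': 0.5}}
```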
NLU KPIs
Metric name | Metric display name | Type | Description |
---|---|---|---|
NLU Accuracy Buckets | 100 | KPI | The number of messages that had an accuracy score of 100. Note: This metric is no longer available in DRUID version 1.74 and higher. |
NLU Accuracy Buckets | 90-100 | KPI | The number of messages that had an accuracy score between 90 and 100. Note: This metric is no longer available in DRUID version 1.74 and higher. |
Messages | Intents not recognized | KPI, chart | The number of user messages to which the chatbot replied with the special flow “Intent not recognized”. |
Messages | Conversation paths not found | KPI, chart | The number of inputs that don't match the flow step validation criteria (hero or choice steps). The chatbot tries to understand what the user says and convert it into the expected format; if it fails, it responds to the user with “I do not understand, please repeat”. |
Messages | Invalid inputs | KPI | The number of messages where, inside a dialog, the user input was not the one the chatbot expected. For example, if a prompt step expects a date of birth in the input mapping (type: DateTime) and the user enters a name (type: string), the chatbot triggers the special message InputValueInterpretationError. |
Messages | Security violation | | The number of user says that trigger a flow the user is not allowed to use. For example, you have an “Activate contract” flow that only users with the Account Manager role can use. The chatbot uses NLP to identify the user intent and triggers the respective flow. |
Conversation path not found drivers | - | Chart | The number of inputs that don't match the flow step validation criteria, grouped by flow and step name. |
Invalid input drivers | - | Chart | The number of messages where, inside a dialog, the user input was not the one the chatbot expected, grouped by flow and step name. |
NLU Accuracy | Percent Accuracy | % | The percentage of phrases correctly matched by the conversational model (see the sketch after this table). Note: The metric is available in DRUID 1.74 and higher. |
NLU Accuracy | Percent Errors | % | The percentage of phrases matched incorrectly by the conversational model. Note: The metric is available in DRUID 1.74 and higher. |
NLU Accuracy | Total Accuracy | Sum | Total number of correctly matched phrases. Note: The metric is available in DRUID 1.74 and higher. |
NLU Accuracy | Total Errors | Sum | Total number of incorrectly matched phrases. Note: The metric is available in DRUID 1.74 and higher. |
NLU Accuracy Buckets | - | Chart | The number of correctly matched phrases, grouped by their corresponding probability bucket. Note: The metric is available in DRUID 1.74 and higher. |
NLU Accuracy Buckets False | - | Chart | The number of incorrectly matched phrases, grouped by their corresponding probability bucket. Note: The metric is available in DRUID 1.74 and higher. |
NLU Accuracy | - | Chart | The number of correctly and incorrectly matched phrases by the conversational model. Note: The metric is available in DRUID 1.74 and higher. |
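The DRUID 1.74+ NLU Accuracy metrics above can all be described in terms of a set of matched phrases, each with a correct/incorrect flag and a match probability. The sketch below is an illustration under that assumption only, not DRUID's implementation; the record shape and the 10-point bucket boundaries are assumptions.

```python
# Minimal sketch (assumed record shape and bucket boundaries, not DRUID's implementation)
# of the DRUID 1.74+ NLU Accuracy metrics: percent/total of correctly and incorrectly
# matched phrases and the probability buckets used by the charts.
from collections import Counter

def bucket(probability):
    """Assign a match probability in [0, 1] to a 10-point bucket label, e.g. 0.93 -> '90-100'."""
    low = min(int(probability * 10) * 10, 90)
    return f"{low}-{low + 10}"

def nlu_accuracy(matches):
    """matches: list of (is_correct: bool, probability: float) pairs, one per matched phrase."""
    total_correct = sum(1 for ok, _ in matches if ok)
    total_errors = len(matches) - total_correct
    total = len(matches)
    percent_accuracy = 100.0 * total_correct / total if total else 0.0
    return {
        "Percent Accuracy": percent_accuracy,
        "Percent Errors": 100.0 - percent_accuracy if total else 0.0,
        "Total Accuracy": total_correct,
        "Total Errors": total_errors,
        "NLU Accuracy Buckets": Counter(bucket(p) for ok, p in matches if ok),
        "NLU Accuracy Buckets False": Counter(bucket(p) for ok, p in matches if not ok),
    }

sample = [(True, 0.95), (True, 0.72), (False, 0.55), (True, 1.0)]
result = nlu_accuracy(sample)
print(result["Percent Accuracy"], result["Total Errors"])    # 75.0 1
print(result["NLU Accuracy Buckets"])                        # 90-100: 2, 70-80: 1
```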
Filtering dashboard data
To filter dashboard data, select the date range for which you want to monitor your chatbot usage:
- Click the Filter button at the top of the dashboard. The Dashboard Filters pop-up appears.
- Click on the Date range field. The period selector appears.
- Select the desired period.
- Click the Apply button.
The Dashboard displays historical metric data for the selected chatbot within the specified date range. Historical data means that if today is included in the date range, Druid calculates KPI values for events from the first day selected (from date) up to the past 24 hours. If you select Today as the Date range, the dashboard shows KPI values for the past 24 hours.
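The sketch below is only a minimal illustration of the date-range behavior described above, under the assumption that Today resolves to a rolling window over the past 24 hours and that a longer range including today runs from the selected from date up to the present; it is not DRUID code, and the function name is hypothetical.

```python
# A minimal illustration (not DRUID code) of the described date-range semantics:
# "Today" is treated as a rolling window over the past 24 hours, and a range that
# includes today runs from the selected "from" date up to the present.
from datetime import datetime, timedelta

def reporting_window(date_range, now):
    """date_range: the string 'Today' or a (from_date, to_date) tuple of datetimes."""
    if date_range == "Today":
        return now - timedelta(hours=24), now            # rolling past 24 hours
    from_date, to_date = date_range
    end = now if to_date.date() >= now.date() else to_date
    return from_date, end

now = datetime(2024, 5, 20, 10, 0)
print(reporting_window("Today", now))
print(reporting_window((datetime(2024, 5, 1), datetime(2024, 5, 20)), now))
```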
Enabling Time Series KPIs
By default, time series is disabled. For time series KPIs to register data on the Dashboard, you need to enable time series.
To do so, follow these steps:
- In the top-right corner of the dashboard, click the Settings icon.
- Click Time series enabled. A pop-up appears.
- Click Yes.
The Dashboard Settings page appears.
By default, if you do not select a specific period, Druid runs an automatic job in the background at the end of each day, which calculates the KPIs using historical data from the past 30 days. Because today's data is only included in the calculation at the end of the day, it is reflected on the dashboard widgets tomorrow.
If you want Druid to calculate time series KPIs using historical data within a specific period (other than the last 30 days), from the Period range field, select the desired period and click the Re-calculate button.
Druid starts calculating the KPIs using historical data within the selected period.
Click the Refresh button.
When the status is “Completed”, Druid has finished calculating the KPIs for the specified period.
On the dashboard, the time series KPIs will show data registered within the specified period.
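As a recap of the recalculation behavior described above, the sketch below shows how the reporting window could be derived: the default nightly job covers the past 30 days and reflects today's data only tomorrow, while a custom period selected in the Period range field overrides the default. This is an illustration under those assumptions, not DRUID's scheduler, and the function name is hypothetical.

```python
# A minimal sketch (not DRUID's scheduler) of the time series recalculation window:
# the default nightly job covers the past 30 days and excludes the current day, whose
# data is reflected on the dashboard tomorrow; a custom Period range overrides it.
from datetime import date, timedelta

def recalculation_window(today, custom_period=None):
    """custom_period: optional (start_date, end_date) selected in the Period range field."""
    if custom_period:
        return custom_period
    end = today - timedelta(days=1)           # today's data shows up tomorrow
    return end - timedelta(days=29), end      # default: the past 30 days

print(recalculation_window(date(2024, 5, 20)))
# -> (datetime.date(2024, 4, 20), datetime.date(2024, 5, 19))
```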